Goals

Overview

Prerequisites

Installation

Set up your local machine

Follow these steps to set up the environment and run the demo

  1. Install the Openshift Client on your machine (Optional)

    1. Download the tar.gz file and uncompress the content locally

    2. Edit your bash profile to add an OPENSHIFT_HOME variable pointing to the location of the extracted content

      export OPENSHIFT_HOME=~/MyApplications/openshift-origin-v1.0.6
      export PATH=$OPENSHIFT_HOME:$PATH
    3. Source your bash profile to be able to run the oc command from a terminal

  2. Install GoFabric8 Client binary on your machine (Optional)

    1. Download the tar.gz file and uncompress the content locally

    2. Edit your bash profile to add a GOFABRIC8_HOME variable pointing to the location of the extracted content

      export GOFABRIC8_HOME=~/MyApplications/gofabric8-0.3.13
      export PATH=$GOFABRIC8_HOME:$PATH
    3. Source your bash profile to be able to run the gofabric8 command from a terminal

  3. Configure the routing between your local machine and the VM Box

    1. Add the routes that allow the macOS host to reach the pods/Docker containers running within the VM by issuing these commands in a terminal

      sudo route -n delete 172.0.0.0/8
      sudo route -n add 172.0.0.0/8  172.28.128.4
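Once the route is in place, a quick connectivity check from the host can look like the sketch below; it assumes the box exposes the Docker daemon on port 2375, as used later in this guide.

```shell
# Probe the Vagrant box's Docker port through the new route (2s timeout).
# Uses bash's built-in /dev/tcp, so no extra tools are required.
if timeout 2 bash -c 'echo > /dev/tcp/172.28.128.4/2375' 2>/dev/null; then
  echo "route OK: Docker daemon reachable"
else
  echo "no route to 172.28.128.4 yet"
fi
```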

Install Vagrant config file

  1. Clone the Fabric8-installer git project or download/unzip it from this repository address

  2. Unzip the content

  3. Open a terminal and move to the fabric8-installer/vagrant/openshift directory

  4. Start Vagrant using this command

    vagrant up

    When the VirtualBox machine has been created and started successfully, you should see this message in the console

    ==> default: --------------------------------------------------------------
    ==> default: Fabric8 pod is running! Who-hoo!
    ==> default: --------------------------------------------------------------
    ==> default:
    ==> default: Now open the fabric8 console at:
    ==> default:
    ==> default:     http://fabric8.vagrant.f8/
    ==> default:
    ==> default: When you first open your browser Chrome will say:
    ==> default:
    ==> default:    Your connection is not private
    ==> default:
    ==> default: * Don't panic!
    ==> default: * Click on the small 'Advanced' link on the bottom left
    ==> default: * Now click on the link that says 'Proceed to fabric8.vagrant.f8 (unsafe)' bottom left
    ==> default: * Now the browser should redirect to the login page. Enter admin/admin
    ==> default: * You should now be in the main fabric8 console. That was easy eh! :)
    ==> default: * Make sure you start off in the 'default' namespace.
    ==> default:
    ==> default: To install more applications click the Run... button on the Apps tab.
    ==> default:
    ==> default: We love feedback: http://fabric8.io/community/
    ==> default: Havefun!
    ==> default:
    ==> default: Now open the fabric8 console at:
    ==> default:
    ==> default:     http://fabric8.vagrant.f8/
    ==> default:
    ==> default: --------------------------------------------------------------
    ==> default: deploymentconfigs/docker-registry
    ==> default: services/docker-registry

Install Snapshot of CD-Pipeline

As a snapshot version of the Fabric8 Forge Docker image is required, we have to build it and deploy the Docker image into OSv3. Move to the fabric8-devops/fabric8-forge project directory and run these commands in a local terminal.

unset DOCKER_CERT_PATH
unset DOCKER_TLS_VERIFY
export DOCKER_HOST=tcp://172.28.128.4:2375
export KUBERNETES_NAMESPACE=default
export KUBERNETES_MASTER=https://172.28.128.4:8443
export KUBERNETES_DOMAIN=vagrant.f8
export KUBERNETES_TRUST_CERT="true"
oc login -u admin -p admin https://172.28.128.4:8443

cd fabric8-devops/fabric8-forge
mvn clean install docker:build fabric8:recreate

Install the docker images

Return to the terminal opened within the fabric8-installer/vagrant/openshift directory and ssh into your Vagrant machine. As the cd-pipeline kube apps require the Docker images of gerrit, gogs, jenkins, nexus, …, install them by issuing these commands within the Vagrant box

vagrant ssh

docker pull fabric8/fabric8-console
docker pull fabric8/nexus
docker pull fabric8/jenkernetes
docker pull fabric8/gerrit
docker pull tpires/sonar-server

Install the cd-pipeline template

Next, we will deploy the template of the cd-pipeline application using the fabric8-devops project. Move to the fabric8-devops/packages/cd-pipeline directory and run these commands from a terminal on your host machine, as the Vagrant box can't access the generated file

cd /Users/chmoulli/Fuse/Fuse-projects/fabric8/fabric8-devops-cloned/packages/cd-pipeline

unset DOCKER_CERT_PATH
unset DOCKER_TLS_VERIFY
export DOCKER_HOST=tcp://172.28.128.4:2375
export KUBERNETES_NAMESPACE=default
export KUBERNETES_MASTER=https://172.28.128.4:8443
export KUBERNETES_DOMAIN=vagrant.f8
export KUBERNETES_TRUST_CERT="true"
oc login -u admin -p admin https://172.28.128.4:8443
mvn clean install fabric8:recreate

OR

oc process -f /Users/chmoulli/.m2/repository/io/fabric8/devops/packages/cd-pipeline/2.2.35-SNAPSHOT/cd-pipeline-2.2.35-SNAPSHOT-kubernetes.json | oc create -f -

Issue with keys

As the public keys generated by gofabric8 can't be used within the created Docker containers (specifically for Gerrit), we will generate new keys locally and import them into the OSv3 platform

Remark: the folder names correspond to the names of the keys defined within the gerrit kube app project

  1. Run this command within the emea-2015 project to generate the keys and import them into OSv3

    ./demo/scripts/gen_keys_import.sh
  2. Check that the keys have been imported correctly (subl is a shortcut to open sublime text editor)

    oc get -o json secret gerrit-admin-ssh | subl &
    oc get -o json secret gerrit-users-ssh-keys | subl &
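The key material in the secret's JSON output is base64-encoded under "data"; to inspect a value, decode it. A minimal illustration with a literal sample value (in practice, pipe the field out of the JSON shown by the commands above):

```shell
# Secret values are stored base64-encoded; decode one to read it.
# 'c3NoLXJzYSBBQUFB' is just an illustrative sample, not a real key.
echo 'c3NoLXJzYSBBQUFB' | base64 -d
# prints: ssh-rsa AAAA
```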

Redeploy Gerrit to use the correct secrets

  1. First remove the gerrit-site folder created previously on the vagrant machine

    sudo rm -rf /home/gerrit-site
  2. Next, move to the gerrit directory of the project fabric8-devops/gerrit and redeploy the template

    mvn clean install fabric8:recreate
  3. Open your browser at http://localhost:8080/#/settings/ssh-keys; you should see the SSH keys imported for the users admin, jenkins and sonar
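The same check can be done from a terminal; a sketch, assuming the gerrit-ssh route (port 29418) is reachable and the imported admin key is loaded in your SSH agent:

```shell
# Ask Gerrit for its version over SSH; a reply proves the imported admin
# key is accepted. Host and port are the ones used throughout this setup.
ssh -o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=3 \
    -p 29418 admin@gerrit.vagrant.f8 gerrit version \
  || echo "SSH check failed (host unreachable or key not accepted)"
```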

Tricks

  • Edit the keys

oc edit secret/gerrit-admin-ssh -o json
oc edit secret/gerrit-users-ssh-keys -o json
oc edit template/cd-pipeline -o json
  • To display them

oc get -o json secret gerrit-admin-ssh | subl &
oc get -o json secret gerrit-users-ssh-keys | subl &

oc get -o json template cd-pipeline
oc get -o json template gerrit
  • To delete a template

oc delete template cd-pipeline
oc delete secrets gerrit-admin-ssh

Open your browser and access the Fabric8 console at this address http://fabric8.vagrant.f8/. The login/password to be used is admin/admin

  • Setup ENV vars to access Docker or Openshift daemons running within the Virtualbox machine

    • Point the macOS host at the Docker daemon running within the Vagrant VM and set the Kubernetes env vars

    • Run these commands within a terminal

      unset DOCKER_CERT_PATH
      unset DOCKER_TLS_VERIFY
      export DOCKER_HOST=tcp://vagrant.f8:2375
      export KUBERNETES_NAMESPACE=default
      export KUBERNETES_MASTER=https://vagrant.f8:8443
      export KUBERNETES_DOMAIN=vagrant.f8
      export KUBERNETES_TRUST_CERT="true"
  • Or run this bash script

source ./demo/scripts/set_kubernetes_env.sh 172.28.128.4
  • Authenticate the Openshift Client with the Openshift platform and select default as the current project

oc login -u admin -p admin https://172.28.128.4:8443
oc project default

or

./scripts/authenticate_with_os.sh

Download Fabric8 Kubernetes templates

cd target
curl -o fabric8.zip http://repo1.maven.org/maven2/io/fabric8/apps/distro/2.2.19/distro-2.2.19-templates.zip
unzip fabric8.zip

Deploy the Fabric8 Continuous Delivery application

oc process -v DOMAIN='vagrant.f8' -f main/cdelivery-2.2.19.json  | oc create -f -
Don’t worry about error messages concerning the elasticsearch, elasticsearch-cluster & kibana kube apps, as they have already been deployed when the VirtualBox machine was started

Control Deployment

  • Control that the Fabric8 Pods & Services have been created

oc get pods
oc get services

oc get svc
NAME              LABELS                                     SELECTOR                                   IP(S)            PORT(S)
docker-registry   docker-registry=default                    docker-registry=default                    172.30.136.53    5000/TCP
elasticsearch     component=elasticsearch,provider=fabric8   component=elasticsearch,provider=fabric8   172.30.74.191    9200/TCP
fabric8           component=console,provider=fabric8         component=console,provider=fabric8         172.30.218.102   80/TCP
fabric8-forge     component=fabric8Forge,provider=fabric8    component=fabric8Forge,provider=fabric8    172.30.127.171   80/TCP
gerrit            component=gerrit,provider=fabric8          component=gerrit,provider=fabric8          172.30.153.170   80/TCP
gerrit-ssh        component=gerrit,provider=fabric8          component=gerrit,provider=fabric8          172.30.128.61    29418/TCP
gogs              component=gogs,provider=fabric8            component=gogs,provider=fabric8            172.30.209.199   80/TCP
gogs-ssh          component=gogs,provider=fabric8            component=gogs,provider=fabric8            172.30.255.164   22/TCP
jenkins           component=jenkins,provider=fabric8         component=jenkins,provider=fabric8         172.30.119.13    80/TCP
kibana            component=kibana,provider=fabric8          component=kibana,provider=fabric8          172.30.16.216    80/TCP
kubernetes        component=apiserver,provider=kubernetes    <none>                                     172.30.0.2       443/TCP
kubernetes-ro     component=apiserver,provider=kubernetes    <none>                                     172.30.0.1       80/TCP
nexus             component=nexus,provider=fabric8           component=nexus,provider=fabric8           172.30.126.22    80/TCP
router            router=router                              router=router                              172.30.165.182   80/TCP


oc get pods
NAME                      READY     REASON    RESTARTS   AGE
docker-registry-1-rr459   1/1       Running   0          44m
elasticsearch-mb3fv       2/2       Running   0          22m
fabric8-0upsk             1/1       Running   0          22m
fabric8-forge-2ma9j       1/1       Running   0          22m
gerrit-ctobk              1/1       Running   0          22m
gogs-148m9                1/1       Running   0          22m
jenkins-29e5i             1/1       Running   0          22m
kibana-zfgyf              1/1       Running   0          22m
nexus-1fsnz               1/1       Running   0          22m
router-1-9us2r            1/1       Running   0          44m
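A listing like the one above can be filtered down to pods that are not Running with a small helper (the function name is ours, purely illustrative, not a fabric8 tool):

```shell
# not_running: read an `oc get pods` listing on stdin and print only the
# pods whose REASON column is not "Running".
not_running() { awk 'NR > 1 && $3 != "Running" {print $1, $3}'; }

# Usage: oc get pods | not_running   (empty output means all pods are up)
```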
  • If the gerrit service is not there, check that its JSON file contains the service; if it does not, rebuild it

mvn clean fabric8:json install
  • As the routes do not seem to be created by default, we have to recreate them. So run this script and check that the routes are created

./scripts/rebuildroutes.sh

oc get routes
NAME                    HOST/PORT                       PATH      SERVICE           LABELS
docker-registry         docker-registry.vagrant.local             docker-registry
docker-registry-route   docker-registry.vagrant.local             docker-registry

elasticsearch           elasticsearch.vagrant.local               elasticsearch

fabric8                 fabric8.vagrant.local                     fabric8
fabric8-forge           fabric8-forge.vagrant.local               fabric8-forge
gogs                    gogs.vagrant.local                        gogs
gogs-ssh                gogs-ssh.vagrant.local                    gogs-ssh
jenkins                 jenkins.vagrant.local                     jenkins
kibana                  kibana.vagrant.local                      kibana
nexus                   nexus.vagrant.local                       nexus
router                  router.vagrant.local                      router
  • We can now verify that the nexus, gerrit, gogs & jenkins servers are running by opening these addresses in a web browser

chrome http://gogs.vagrant.f8
chrome http://jenkins.vagrant.f8
chrome http://nexus.vagrant.f8
chrome http://gerrit.vagrant.f8
chrome http://fabric8.vagrant.f8
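The five manual checks above can also be scripted; a sketch assuming curl is available and the vagrant.f8 routes resolve from your machine:

```shell
# Print the HTTP status code of each public route (000 = unreachable).
for svc in gogs jenkins nexus gerrit fabric8; do
  url="http://${svc}.vagrant.f8"
  printf '%-30s %s\n' "$url" \
    "$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url")"
done
```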

Create a CD/CI project

  • Open the Fabric8 Web console and select the "Projects" tab

[image: fabric8 project 1]
  • Enter the login/password to access Gogs (gogsadmin/RedHat$1 & gogsadmin@fabric8.local)

[image: fabric8 git login]
  • From this view, click on the "create project" button. A new screen will be displayed where you can enter the name of the project (= name of the git repo, jenkins dsl pipeline, …), plus the package name & version to be used. Remark: the build system can't be changed for the moment and is Maven, as is the type "From Archetype catalog"

[image: fabric8 project 4]
  • Click on execute; within the next screen you will be able to select the archetype to be used, "io.fabric8.archetypes:java-camel-cdi-archetype:2.2.0", from the "fabric8" Maven catalog. Click on execute to request the creation of the seed, jobs & git repos

[image: fabric8 project 6]
  • When the project is created, you will be redirected to this screen

[image: fabric8 project 7]
  • Review what has been created in jenkins, gogs, gerrit & fabric8

[image: fabric8 project 9]
Figure 1. Git repo created into Gogs
[image: gerrit 4]
Figure 2. Git repo created in Gerrit Review Application
[image: jenkins 1a]
Figure 3. Jenkins jobs for the project created (it, dev, deploy)
[image: jenkins 1b]
Figure 4. Jenkins console output
[image: jenkins 2]
Figure 5. Fabric8 CD/CI Pipeline created from the project
  • Clone the Gogs Git repo from a terminal in order to make a change & start a review process

   git clone http://gogs.vagrant.f8/gogsadmin/devnation.git
   Cloning into 'devnation'...
   remote: Counting objects: 24, done.
   remote: Compressing objects: 100% (16/16), done.
   remote: Total 24 (delta 2), reused 0 (delta 0)
   Unpacking objects: 100% (24/24), done.
   Checking connectivity... done.
  • Add Gerrit Review hook to the project

In order to use the review branch created within the Gerrit git repo, we will add the branch and modify the git hook message so that a unique commit-id message is generated.

Run the script, passing as parameters the directory name of the project created locally on your machine and the name of the Gerrit git repository (for example: devnation)

  ./scripts/review.sh devnation devnation

   /path/to/the/script/scripts/review.sh devnation devnation
   Counting objects: 24, done.
   Delta compression using up to 8 threads.
   Compressing objects: 100% (16/16), done.
   Writing objects: 100% (24/24), 6.11 KiB | 0 bytes/s, done.
   Total 24 (delta 2), reused 0 (delta 0)
   remote: Resolving deltas: 100% (2/2)
   remote: Processing changes: refs: 1, done
   To http://admin@gerrit.vagrant.f8/devnation
    * [new branch]      master -> master
     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                    Dload  Upload   Total   Spent    Left  Speed
   100  4360  100  4360    0     0    867      0  0:00:05  0:00:05 -:--:--  304k
  • Commit a change

Within the terminal where you have cloned the Gogs repo, edit the file README.md and change the text. Next, commit the change and push it to the review remote

git commit -m "First commit" -a
[master d53d106] First commit
 1 file changed, 2 insertions(+)
dabou:~/Temp/test-devnation/devnation$ git push review
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 399 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1)
remote: Processing changes: new: 1, refs: 1, done
remote:
remote: New Changes:
remote:   http://localhost:8080/1 First commit
remote:
  • Review the change and accept it within Gerrit

[image: gerrit review1]
[image: gerrit review2]
[image: gerrit review3]
[image: gerrit review4]
  • Check that the modification has been replicated with Gogs

[image: gogs review]
  • Start the pipeline

Return to the Jenkins web console and start the project's pipeline. After a few moments, you will see that the different jobs have succeeded.

[image: jenkins 2]
[image: jenkins 3]
[image: jenkins 5]
[image: jenkins 6]

When the devnation-ci job has finished and the project has been compiled, you will be able to retrieve the artifact within the Nexus repo

[image: nexus]

And when the Docker image of the project has been created, Fabric8 will deploy it on Openshift and you will be able to access the deployed Apache Camel route

You can access the deployed application using the Fabric8 Kubernetes view: select the application and click on the start/open button

[image: fabric8 project 11]
[image: camel docker]

Enjoy your first Apache Camel Docker experience with the Openshift Fabric8 technology & our CD/CI strategy!

Camel Quickstart

  1. Show the project camel-servlet within the quickstart project

  2. Analyze the properties of the pom.xml file

  3. Check out master or the 2.1.6 tag of the camel-servlet WAR quickstart app & compile/build/deploy the kube app & Docker image

  4. Deploy the project

    mvn install docker:build fabric8:json fabric8:apply
  5. Run command fabric8:json

    mvn clean fabric8:json compile

Deploy the Camel Servlet WAR example

mvn fabric8:apply -Dfabric8.recreate=true -Dfabric8.domain=vagrant.f8

The Camel Web Servlet application is then accessible at http://quickstart-camelservlet.vagrant.f8/

Troubleshooting

To access a docker container

    docker exec -it $(docker ps | grep 'fabric8/gerrit' | cut -f1 -d" ") bash
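The grep/cut pattern above can be wrapped in a small helper function (the name dsh is ours, purely illustrative):

```shell
# dsh: open a bash shell in the first running container whose image
# matches the given pattern; reports when nothing matches.
dsh() {
  local id
  id=$(docker ps 2>/dev/null | grep "$1" | cut -f1 -d" " | head -1)
  if [ -n "$id" ]; then
    docker exec -it "$id" bash
  else
    echo "no container matching '$1'"
  fi
}

# Usage: dsh fabric8/gerrit
```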

Get pods, services, …

    oc get pods -l provider=fabric8
    oc get rc -l provider=fabric8
    oc get svc -l provider=fabric8
    oc get oauthclients | grep fabric8

Delete pods, services & replica


    oc delete rc -l provider=fabric8
    oc delete pods -l provider=fabric8
    oc delete svc -l provider=fabric8
    oc delete oauthclients fabric8

Delete containers & image

    docker rm $(docker ps -a | grep gerrit | awk '{print $1}')
    docker rmi $(docker images | grep gerrit | awk '{print $3}')

Delete PODS using Fabric8 plugin

    mvn fabric8:delete-pods

For more see http://fabric8.io/guide/mavenFabric8DeletePods.html

Delete the Fabric8 App

oc delete rc -l provider=fabric8
oc delete pods -l provider=fabric8
oc delete svc -l provider=fabric8
oc delete oauthclients fabric8

oc get pods -l provider=fabric8
oc get rc -l provider=fabric8
oc get svc -l provider=fabric8
oc get oauthclients | grep fabric8

Delete the containers & images

docker rm $(docker ps -a | grep fabric8 | awk '{print $1}')
docker rmi $(docker images | grep fabric8 | awk '{print $3}')

Install Base and CDelivery

oc process -f http://central.maven.org/maven2/io/fabric8/apps/base/2.2.23.1/base-2.2.23.1-kubernetes.json | oc create -f -
oc process -f http://central.maven.org/maven2/io/fabric8/apps/cdelivery-core/2.2.23.1/cdelivery-core-2.2.23.1-kubernetes.json | oc create -f -
oc process -f /Users/chmoulli/.m2/repository/io/fabric8/devops/apps/gerrit/2.2.31-SNAPSHOT/gerrit-2.2.31-SNAPSHOT-kubernetes.json | oc create -f -

Compile & Deploy a project

mvn clean fabric8:json compile
mvn fabric8:apply -Dfabric8.recreate=true -Dfabric8.domain=vagrant.local

docker exec -it $(docker ps | grep 'fabric8/gerrit' | cut -f1 -d" ") bash
docker stop $(docker ps | grep 'fabric8/gerrit' | cut -f1 -d" ")

docker exec -it $(docker ps | grep 'fabric8/gogs' | cut -f1 -d" ") bash

Check logs of journalctl

    sudo journalctl -r -u openshift
    sudo journalctl -r -u docker